
1. Identity statement
Reference Type: Conference Paper (Conference Proceedings)
Site: sibgrapi.sid.inpe.br
Holder Code: ibi 8JMKD3MGPEW34M/46T9EHH
Identifier: 8JMKD3MGPAW/3RQE3SH
Repository: sid.inpe.br/sibgrapi/2018/09.10.20.04
Last Update: 2018:09.10.20.04.03 (UTC) administrator
Metadata Repository: sid.inpe.br/sibgrapi/2018/09.10.20.04.03
Metadata Last Update: 2022:06.14.00.09.29 (UTC) administrator
DOI: 10.1109/SIBGRAPI.2018.00019
Citation Key: Cardenas:2018:MuHuAc
Title: Multimodal Human Action Recognition Based on a Fusion of Dynamic Images using CNN descriptors
Format: On-line
Year: 2018
Access Date: 2024, Apr. 27
Number of Files: 1
Size: 2214 KiB
2. Context
Author: Cardenas, Edwin Jonathan Escobedo
Affiliation: Federal University of Ouro Preto
Editor: Ross, Arun
Gastal, Eduardo S. L.
Jorge, Joaquim A.
Queiroz, Ricardo L. de
Minetto, Rodrigo
Sarkar, Sudeep
Papa, João Paulo
Oliveira, Manuel M.
Arbeláez, Pablo
Mery, Domingo
Oliveira, Maria Cristina Ferreira de
Spina, Thiago Vallin
Mendes, Caroline Mazetto
Costa, Henrique Sérgio Gutierrez
Mejail, Marta Estela
Geus, Klaus de
Scheer, Sergio
e-Mail Address: edu.escobedo88@gmail.com
Conference Name: Conference on Graphics, Patterns and Images, 31 (SIBGRAPI)
Conference Location: Foz do Iguaçu, PR, Brazil
Date: 29 Oct.-1 Nov. 2018
Publisher: IEEE Computer Society
Publisher City: Los Alamitos
Book Title: Proceedings
Tertiary Type: Full Paper
History (UTC): 2018-09-10 20:04:03 :: edu.escobedo88@gmail.com -> administrator ::
2022-06-14 00:09:29 :: administrator -> :: 2018
3. Content and structure
Is the master or a copy?: is the master
Content Stage: completed
Transferable: 1
Version Type: finaldraft
Keywords: action recognition, dynamic images, RGB-D data, kinect, CNN
Abstract: In this paper, we propose a dynamic-image-based approach for action recognition. Specifically, we exploit the multimodal information recorded by a Kinect sensor (RGB-D and skeleton joint data). We combine several ideas from rank pooling and skeleton optical spectra to generate dynamic images that summarize an action sequence into single flow images. We group our dynamic images into five groups: a dynamic color group (DC), a dynamic depth group (DD), and three dynamic skeleton groups (DXY, DYZ, DXZ). As an action is composed of different postures over time, we generate N different dynamic images capturing the main postures of each dynamic group. Next, we apply a pre-trained flow-CNN to extract spatiotemporal features with a max-mean aggregation. The proposed method was evaluated on a public benchmark dataset, UTD-MHAD, and achieved a state-of-the-art result.
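The dynamic-image idea from the abstract can be illustrated with a minimal sketch. This is not the paper's exact pipeline (which fuses rank pooling with skeleton optical spectra across five modality groups); it uses the well-known approximate-rank-pooling weighting alpha_t = 2t - T - 1 as an assumed stand-in to collapse a frame sequence into one summary image:

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a clip of shape (T, H, W, C) into a single summary image.

    Uses approximate-rank-pooling weights alpha_t = 2t - T - 1, so later
    frames contribute positively and earlier frames negatively. This is an
    illustrative approximation, not the paper's exact method.
    """
    T = frames.shape[0]
    alphas = 2 * np.arange(1, T + 1) - T - 1            # shape (T,)
    # Weighted sum over the time axis -> (H, W, C)
    di = np.tensordot(alphas, frames.astype(np.float64), axes=(0, 0))
    # Rescale to [0, 255] so the result can feed a pre-trained CNN.
    rng = di.max() - di.min()
    di = (di - di.min()) / (rng + 1e-8) * 255.0
    return di.astype(np.uint8)

# Example: one dynamic image per modality group (DC, DD, DXY, ...)
clip = np.random.rand(16, 64, 64, 3)                    # synthetic RGB clip
di = dynamic_image(clip)
print(di.shape)  # (64, 64, 3)
```

In the paper's setting, one such image would be produced per group (color, depth, and three skeleton projections) and per posture segment, before CNN feature extraction and max-mean aggregation.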
Arrangement 1: urlib.net > SDLA > Fonds > SIBGRAPI 2018 > Multimodal Human Action...
Arrangement 2: urlib.net > SDLA > Fonds > Full Index > Multimodal Human Action...
doc Directory Content: access
source Directory Content: there are no files
agreement Directory Content:
agreement.html 10/09/2018 17:04 1.2 KiB
4. Conditions of access and use
data URL: http://urlib.net/ibi/8JMKD3MGPAW/3RQE3SH
zipped data URL: http://urlib.net/zip/8JMKD3MGPAW/3RQE3SH
Language: en
Target File: Multimodal_Human_Action_Recognition_Based_on_a_Fusion_of_Dynamic_Images_using_CNN_descriptors.pdf
User Group: edu.escobedo88@gmail.com
Visibility: shown
Update Permission: not transferred
5. Allied materials
Mirror Repository: sid.inpe.br/banon/2001/03.30.15.38.24
Next Higher Units: 8JMKD3MGPAW/3RPADUS
8JMKD3MGPEW34M/4742MCS
Citing Item List: sid.inpe.br/sibgrapi/2018/09.03.20.37 10
sid.inpe.br/banon/2001/03.30.15.38.24 1
Host Collection: sid.inpe.br/banon/2001/03.30.15.38
6. Notes
Empty Fields: archivingpolicy archivist area callnumber contenttype copyholder copyright creatorhistory descriptionlevel dissemination edition electronicmailaddress group isbn issn label lineage mark nextedition notes numberofvolumes orcid organization pages parameterlist parentrepositories previousedition previouslowerunit progress project readergroup readpermission resumeid rightsholder schedulinginformation secondarydate secondarykey secondarymark secondarytype serieseditor session shorttitle sponsor subject tertiarymark type url volume